Performance Analysis of Project-and-Forward Relaying in Mixed MIMO-Pinhole and Rayleigh Dual-Hop Channel
In this letter, we present an end-to-end performance analysis of dual-hop
project-and-forward relaying in a realistic scenario, where the source-relay
and the relay-destination links are experiencing MIMO-pinhole and Rayleigh
channel conditions, respectively. We derive the probability density function of
both the relay post-processing and the end-to-end signal-to-noise ratios, and
the obtained expressions are used to derive the outage probability of the
analyzed system as well as its end-to-end ergodic capacity in terms of
generalized functions. Then, applying residue theory to the Mellin-Barnes
integrals, we infer the system's asymptotic behavior for different channel
parameters. As the bivariate Meijer-G function is involved in the analysis, we
propose a new and fast MATLAB implementation enabling an automated definition
of the complex integration contour. Extensive Monte-Carlo simulations are
invoked to corroborate the analytical results.
Comment: 4 pages, IEEE Communications Letters, 201
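As a rough, illustrative cross-check of the kind reported in the Monte-Carlo
section, the following Python/NumPy sketch simulates a simplified dual-hop
model: the first-hop (pinhole/keyhole) SNR is drawn as a scaled product of two
exponential variates, the second-hop (Rayleigh) SNR as a single exponential,
and the end-to-end SNR is approximated by the minimum of the two hops. The
average SNRs, the threshold, and the min-SNR approximation are assumptions for
illustration only, not the letter's exact project-and-forward post-processing
model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative assumptions, not the letter's exact model:
    # hop 1 (MIMO-pinhole/keyhole): SNR ~ scaled product of two exponentials
    # hop 2 (Rayleigh):             SNR ~ single scaled exponential
    avg_snr_1, avg_snr_2 = 10.0, 10.0   # average per-hop SNRs (linear scale)
    gamma_th = 2.0                      # outage threshold (linear scale)
    n = 1_000_000

    snr_1 = avg_snr_1 * rng.exponential(size=n) * rng.exponential(size=n)
    snr_2 = avg_snr_2 * rng.exponential(size=n)
    snr_e2e = np.minimum(snr_1, snr_2)  # crude end-to-end SNR approximation

    print("outage probability ~", np.mean(snr_e2e < gamma_th))
    print("ergodic capacity   ~", np.mean(np.log2(1 + snr_e2e)), "bit/s/Hz")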
Towards Bridging the FL Performance-Explainability Trade-Off: A Trustworthy 6G RAN Slicing Use-Case
In the context of sixth-generation (6G) networks, where diverse network
slices coexist, the adoption of AI-driven zero-touch management and
orchestration (MANO) becomes crucial. However, ensuring the trustworthiness of
AI black-boxes in real deployments is challenging. Explainable AI (XAI) tools
can play a vital role in establishing transparency among the stakeholders in
the slicing ecosystem. However, there is a trade-off between AI performance and
explainability, posing a dilemma for trustworthy 6G network slicing: the
stakeholders require both high-performing AI models for efficient resource
allocation and explainable decision-making to ensure fairness, accountability,
and compliance. To balance this trade-off, and inspired by closed-loop
automation and XAI methodologies, this paper presents a novel
explanation-guided in-hoc federated learning (FL) approach where a constrained
resource allocation model and an explainer exchange -- in a closed-loop (CL)
fashion -- soft attributions of the features as well as inference predictions
to achieve transparent 6G network slicing resource management in a RAN-Edge
setup under non-independent identically distributed (non-IID) datasets. In
particular, we quantitatively validate the faithfulness of the explanations via
the so-called attribution-based confidence metric that is included as a
constraint to guide the overall training process in the run-time FL
optimization task. In this respect, Integrated Gradients (IG), Input
Gradient, and SHAP are used to generate the attributions for our
proposed in-hoc scheme, and simulation results under the different methods
confirm its success in tackling the performance-explainability trade-off and
its superiority over the unconstrained Integrated-Gradient post-hoc FL
baseline.
Comment: Submitted for possible publication in IEEE. arXiv admin note:
substantial text overlap with arXiv:2210.1014
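As a rough illustration of how attribution scores can be folded into a
training objective as a soft constraint, the PyTorch sketch below computes
Integrated-Gradient attributions for a toy model and adds a simple
attribution-derived confidence penalty to the task loss. The toy model, the
exact form of the confidence score, and the penalty weight are assumptions for
illustration; the paper's in-hoc FL scheme and its attribution-based
confidence metric are defined in the full text.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # toy per-slice regressor (assumption)

    def integrated_gradients(model, x, baseline=None, steps=32):
        """Riemann-sum approximation of Integrated Gradients w.r.t. the input."""
        if baseline is None:
            baseline = torch.zeros_like(x)
        alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1)
        path = baseline + alphas * (x - baseline)           # (steps, batch, features)
        path = path.reshape(-1, x.shape[-1]).requires_grad_(True)
        # create_graph=True so the attribution-based penalty can shape training
        grads = torch.autograd.grad(model(path).sum(), path, create_graph=True)[0]
        avg_grad = grads.view(steps, *x.shape).mean(dim=0)  # average gradient along the path
        return (x - baseline) * avg_grad

    x = torch.randn(4, 8)      # mini-batch of per-slice features (synthetic)
    y = torch.randn(4, 1)      # target KPI, e.g. allocated resources (synthetic)

    task_loss = nn.functional.mse_loss(model(x), y)
    attr = integrated_gradients(model, x)
    # toy "confidence": how concentrated the attribution mass is on the top feature
    conf = attr.abs().max(dim=1).values / (attr.abs().sum(dim=1) + 1e-8)
    loss = task_loss + 0.1 * (1.0 - conf).mean()   # soft explainability constraint (assumed weight)
    loss.backward()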
Explanation-Guided Deep Reinforcement Learning for Trustworthy 6G RAN Slicing
The complexity of emerging sixth-generation (6G) wireless networks has
sparked an upsurge in adopting artificial intelligence (AI) to underpin the
challenges in network management and resource allocation under strict service
level agreements (SLAs). It inaugurates the era of massive network slicing as a
distributive technology in which tenancy is extended to the final consumer
through the pervasive digitalization of vertical immersive use-cases. Despite
the promising performance of deep reinforcement learning (DRL) in network
slicing, the lack of transparency and interpretability of opaque models
impedes users from trusting the DRL agent's decisions or predictions. This
problem becomes even more pronounced when there is a need to provision highly
reliable and secure services. Leveraging eXplainable AI (XAI) in conjunction
with an explanation-guided approach, we propose an eXplainable reinforcement
learning (XRL) scheme to surmount the opaqueness of black-box DRL. The core
concept behind the proposed method is the intrinsic interpretability of the
reward hypothesis, which encourages DRL agents to learn the best actions for
specific network slice states while coping with conflict-prone and complex
relations of state-action pairs. To validate the proposed framework, we target
a resource allocation optimization problem where multi-agent XRL strives to
allocate optimal available radio resources to meet the SLA requirements of
slices. Finally, we present numerical results to showcase the superiority of
the adopted XRL approach over the DRL baseline. To the best of our knowledge,
this is the first work that studies the feasibility of an explanation-guided
DRL approach in the context of 6G networks.
Comment: 6 pages, 6 figures
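As a toy illustration of the kind of SLA-aware reward hypothesis such agents
optimize (the environment, constants, and reward terms below are assumptions,
not the paper's exact formulation, and the explanation-guided component is
omitted), a minimal multi-slice allocation step in Python could look like:

    import numpy as np

    rng = np.random.default_rng(1)
    N_SLICES, TOTAL_PRBS = 3, 100
    SLA_RATE = np.array([20.0, 10.0, 5.0])   # per-slice required rate in Mb/s (assumption)
    RATE_PER_PRB = 0.5                       # crude rate per resource block (assumption)

    def step(prb_request):
        """One step of a toy multi-agent radio-resource allocation environment."""
        total = prb_request.sum()
        granted = prb_request if total <= TOTAL_PRBS else prb_request * TOTAL_PRBS / total
        achieved = RATE_PER_PRB * granted * (0.8 + 0.4 * rng.random(N_SLICES))  # fading jitter
        sla_gap = np.maximum(SLA_RATE - achieved, 0.0) / SLA_RATE
        reward = -sla_gap - 0.01 * granted / TOTAL_PRBS  # meet the SLA without hoarding resources
        return achieved, reward

    achieved, reward = step(rng.integers(10, 50, size=N_SLICES).astype(float))
    print("achieved rates  :", np.round(achieved, 1))
    print("per-slice reward:", np.round(reward, 3))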
SliceOps: Explainable MLOps for Streamlined Automation-Native 6G Networks
Sixth-generation (6G) network slicing is the backbone of future
communications systems. It inaugurates the era of extreme ultra-reliable and
low-latency communication (xURLLC) and pervades the digitalization of the
various vertical immersive use cases. Since 6G inherently underpins artificial
intelligence (AI), we propose a systematic and standalone slice termed SliceOps
that is natively embedded in the 6G architecture, which gathers and manages the
whole AI lifecycle through monitoring, re-training, and deploying the machine
learning (ML) models as a service for the 6G slices. By leveraging machine
learning operations (MLOps) in conjunction with eXplainable AI (XAI), SliceOps
strives to cope with the opaqueness of black-box AI using explanation-guided
reinforcement learning (XRL) to fulfill transparency, trustworthiness, and
interpretability in the network slicing ecosystem. This article starts by
elaborating on the architectural and algorithmic aspects of SliceOps. Then, the
operation of the deployed cloud-native SliceOps is exemplified via a latency-aware
resource allocation problem. The deep RL (DRL)-based SliceOps agents within
slices provide AI services aiming to allocate optimal radio resources and
impede service quality degradation. Simulation results demonstrate the
effectiveness of SliceOps-driven slicing. The article then discusses the
challenges and limitations of SliceOps. Finally, the key open research directions
corresponding to the proposed approach are identified.
Comment: 8 pages, 6 figures
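A skeleton of the monitor / re-train / deploy loop that such an AI-lifecycle
slice automates might look as follows in Python (the drift proxy, threshold,
and synthetic telemetry are assumptions for illustration, not the SliceOps
implementation):

    import numpy as np

    rng = np.random.default_rng(2)
    DRIFT_THRESHOLD = 0.15   # relative KPI degradation that triggers re-training (assumption)

    def monitor(baseline_error, live_error):
        """Toy drift proxy: relative degradation of the deployed model's error."""
        return (live_error - baseline_error) / max(baseline_error, 1e-9)

    def retrain(version):
        print(f"re-training model v{version} on fresh slice telemetry ...")
        return version + 1

    def deploy(version):
        print(f"deploying model v{version} as a service to the slices")

    version = 1
    deploy(version)
    for period in range(5):                                # one pass per monitoring period
        live_error = 0.10 + 0.05 * period * rng.random()   # synthetic live KPI error
        drift = monitor(baseline_error=0.10, live_error=live_error)
        print(f"period {period}: drift = {drift:.2f}")
        if drift > DRIFT_THRESHOLD:                        # closed loop: monitor -> re-train -> deploy
            version = retrain(version)
            deploy(version)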
Joint Explainability and Sensitivity-Aware Federated Deep Learning for Transparent 6G RAN Slicing
In recent years, wireless networks have grown increasingly complex, driving the
adoption of zero-touch artificial intelligence (AI)-driven network automation within
the telecommunication industry. In particular, network slicing, the most
promising technology beyond 5G, would embrace AI models to manage the complex
communication network. It is also essential to establish the
trustworthiness of AI black boxes in actual deployments, where AI performs
complex resource management and anomaly detection. Inspired by closed-loop
automation and Explainable Artificial Intelligence (XAI), we design an
Explainable Federated deep learning (FDL) model to predict per-slice RAN
dropped traffic probability while jointly considering the sensitivity and
explainability-aware metrics as constraints in such a non-IID setup.
Specifically, we quantitatively validate the faithfulness of the explanations
via the so-called attribution-based log-odds metric, which is included as a
constraint in the run-time FL optimization task. Simulation results confirm its
superiority over an unconstrained integrated-gradient (IG) post-hoc FDL
baseline.
Comment: 6 figures. arXiv admin note: substantial text overlap with
arXiv:2307.09494, arXiv:2210.10147, arXiv:2307.1290
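One common way to compute an attribution-based log-odds score (and plausibly
the spirit of the metric used here, though the paper's exact definition should
be taken from the full text) is to mask the most-attributed features and
measure the drop in the model's log-odds; a larger drop indicates more
faithful attributions. A minimal NumPy sketch under these assumptions:

    import numpy as np

    def log_odds(p, eps=1e-9):
        p = np.clip(p, eps, 1.0 - eps)
        return np.log(p / (1.0 - p))

    def log_odds_drop(predict_proba, x, attributions, k=2, mask_value=0.0):
        """Drop in log-odds after masking the k most-attributed features.
        A larger drop means the explanation points at features the model relies on."""
        top_k = np.argsort(-np.abs(attributions))[:k]
        x_masked = x.copy()
        x_masked[top_k] = mask_value
        return log_odds(predict_proba(x)) - log_odds(predict_proba(x_masked))

    # toy logistic "dropped-traffic" model over per-slice features (weights are an assumption)
    w = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
    predict_proba = lambda x: 1.0 / (1.0 + np.exp(-(w @ x)))

    x = np.array([1.0, 0.5, -0.2, 0.3, 0.1])
    attributions = w * x                 # weight*input is an exact attribution for a linear model
    print("log-odds drop:", log_odds_drop(predict_proba, x, attributions, k=2))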
Signal-Level Cooperative Spatial Multiplexing for Uplink Throughput Enhancement in MIMO Broadband Systems
In this paper, we address the issue of throughput-efficient half-duplex
constrained relaying schemes for broadband uplink transmissions over
multiple-input multiple-output (MIMO) channels. We introduce a low-complexity
signal-level cooperative spatial multiplexing (CM) architecture that allows
for the shortening of the relaying phase without resorting to any symbol
detection or re-mapping at the relay side. Half-duplex latency is thereby
reduced, resulting in a remarkable throughput gain compared to the
amplify-and-forward (AF) relaying scheme. Surprisingly, we show that the CM
strategy becomes more powerful in boosting uplink throughput as the relay
approaches the cell edge.
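The throughput benefit of shortening the relaying phase can be illustrated
with a simple time-sharing argument (the model and numbers below are
assumptions, not the paper's exact signal-level CM architecture): a
half-duplex AF relay listens for one slot and then forwards for a full second
slot, halving the effective rate, while a scheme that compresses the
forwarding phase recovers part of that loss.

    def effective_throughput(link_rate, relay_phase_fraction):
        """Half-duplex throughput when the source transmits for one unit slot and
        the relay then forwards for `relay_phase_fraction` of a slot (toy model)."""
        return link_rate / (1.0 + relay_phase_fraction)

    link_rate = 6.0  # bit/s/Hz on the source-relay link (assumption)
    af = effective_throughput(link_rate, relay_phase_fraction=1.0)  # AF: full forwarding slot
    cm = effective_throughput(link_rate, relay_phase_fraction=0.5)  # CM: shortened phase (assumed factor)
    print(f"AF: {af:.2f} bit/s/Hz   CM: {cm:.2f} bit/s/Hz   gain: x{cm / af:.2f}")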